
    Influence of Big Data in managing cyber assets

    Purpose: Today, Big Data plays an imperative role in the creation, maintenance and loss of organisations' cyber assets. Research connecting Big Data and cyber asset management is embryonic. Drawing on evidence, this paper argues that asset management in the context of Big Data is punctuated by a variety of vulnerabilities that can only be estimated when the characteristics of such assets, such as their intangibility, are adequately accounted for.
    Design/methodology/approach: Evidence is drawn from interviews with leaders of digital transformation projects in three organisations, one each in the insurance, natural gas and oil, and manufacturing industries.
    Findings: By examining the extant literature, the authors trace the influence that Big Data has over asset management within organisations. In a context defined by the variability and volume of data, a return to restricted data flows is unlikely; the focus for asset-managing organisations is now to improve semantic processors to deal with the vast array of data in variable formats.
    Research limitations/implications: The evidence comprises interviews and desk research. Using real-time data together with quantitative analysis could yield insights that have hitherto eluded the research community.
    Originality/value: There is a serious dearth of research on innovative leadership in a threatened asset management space. Interpreting creative initiatives for dealing with a variety of risks to data assets has clear value for a variety of audiences.

    Database independent Migration of Objects into an Object-Relational Database

    This paper reports on the CERN-based WISDOM project, which is studying the serialisation and deserialisation of data to/from an object database (Objectivity/DB) and Oracle 9i.
    Comment: 26 pages, 18 figures; CMS CERN Conference Report cr02_01
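
    The kind of database-independent object (de)serialisation studied in WISDOM can be sketched by flattening objects into generic attribute-value rows that any SQL backend can store. The sketch below is a minimal illustration only: sqlite3 stands in for Objectivity/DB and Oracle 9i, and the ObjectMapper class, its schema and the Event example are hypothetical, not the project's actual design.

```python
# Minimal sketch of database-independent object (de)serialisation.
# sqlite3 is a stand-in backend; the schema and names are hypothetical.
import sqlite3

class ObjectMapper:
    """Flattens objects into (oid, attr, value) rows and back, so the
    same logic works against any SQL backend."""

    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS objects (oid TEXT, attr TEXT, value TEXT)"
        )

    def serialise(self, oid, obj):
        # Values are stored as text for simplicity.
        rows = [(oid, k, str(v)) for k, v in vars(obj).items()]
        self.conn.executemany("INSERT INTO objects VALUES (?, ?, ?)", rows)

    def deserialise(self, oid, cls):
        cur = self.conn.execute(
            "SELECT attr, value FROM objects WHERE oid = ?", (oid,)
        )
        obj = cls.__new__(cls)  # bypass __init__; attributes restored below
        for attr, value in cur:
            setattr(obj, attr, value)
        return obj

class Event:
    def __init__(self, run, energy):
        self.run, self.energy = run, energy

mapper = ObjectMapper(sqlite3.connect(":memory:"))
mapper.serialise("event-1", Event(42, "7.0"))
restored = mapper.deserialise("event-1", Event)
```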

    Role of Knowledge Creation and Absorptive Capacity: A Panel Data Study of Innovation

    Purpose: Knowledge creation refers to the ability of firms to create new knowledge, starting with individuals and then integrating across firms and, ultimately, the overall economy. This study suggests that knowledge acquisition in a country has a significant relationship with innovative performance.
    Design/methodology: Data on 48 high-HDI countries are taken from the World Bank and the World Economic Forum. Based on 480 country-year observations in a panel mediator model, it is revealed that national efforts to boost knowledge acquisition influence firms' innovative performance.
    Findings: Absorptive capacity, in the form of the employability of knowledgeable workers, mediates between knowledge acquisition and innovation, whereby higher knowledge acquisition leads to higher absorptive capacity and, in turn, higher innovation.
    Practical implications: This study builds a quantitative model for the macroeconomic context of the knowledge-based view.
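
    A panel mediator model of this kind can be illustrated with a Baron-Kenny style two-step regression. The sketch below runs on synthetic data; the variable names, the country fixed effects and the two-step specification are assumptions for illustration, not the paper's exact model.

```python
# A minimal sketch of a panel mediation test on 480 country-year
# observations; all names and the specification are assumed, not the paper's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_countries, n_years = 48, 10  # 48 countries x 10 years = 480 observations
df = pd.DataFrame({
    "country": np.repeat(np.arange(n_countries), n_years),
    "knowledge_acquisition": rng.normal(size=n_countries * n_years),
})
# Synthetic data in which acquisition raises absorptive capacity,
# which in turn raises innovation (the hypothesised mediation path).
df["absorptive_capacity"] = 0.6 * df["knowledge_acquisition"] + rng.normal(size=len(df))
df["innovation"] = 0.5 * df["absorptive_capacity"] + rng.normal(size=len(df))

# Step 1: does knowledge acquisition predict the mediator?
m_path = smf.ols("absorptive_capacity ~ knowledge_acquisition + C(country)", df).fit()
# Step 2: does the mediator carry the effect once both are included?
y_path = smf.ols("innovation ~ knowledge_acquisition + absorptive_capacity + C(country)", df).fit()

indirect = (m_path.params["knowledge_acquisition"]
            * y_path.params["absorptive_capacity"])
print(f"indirect (mediated) effect: {indirect:.3f}")
```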

    Scientific Workflow Repeatability through Cloud-Aware Provenance

    The transformations, analyses and interpretations of data in scientific workflows are vital for their repeatability and reliability. Such provenance has been captured effectively in Grid-based scientific workflow systems. However, the recent adoption of Cloud-based scientific workflows presents an opportunity to investigate the suitability of existing approaches, or to propose new ones, for collecting provenance information from the Cloud and utilising it for workflow repeatability on Cloud infrastructure. The dynamic nature of the Cloud, where resources are provisioned on demand rather than pre-allocated as in the Grid, makes this difficult. This paper presents a novel approach that can assist in mitigating this challenge: it collects Cloud infrastructure information along with workflow provenance and establishes a mapping between them, which is later used to re-provision resources on the Cloud. Workflow execution is repeated by (a) capturing the Cloud infrastructure information (virtual machine configuration) along with the workflow provenance, and (b) re-provisioning similar resources on the Cloud and re-executing the workflow on them. The evaluation of an initial prototype suggests that the proposed approach is feasible and can be investigated further.
    Comment: 6 pages, 5 figures, 3 tables. In Proceedings of the Recomputability 2014 workshop at the 7th IEEE/ACM International Conference on Utility and Cloud Computing (UCC 2014), London, December 2014
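
    The mapping idea at the heart of this approach can be sketched as follows: each job's provenance record is linked to the configuration of the virtual machine it ran on, so a similar resource can be requested at replay time. All names below (VMConfig, provision_similar, the trace layout) are hypothetical illustrations, not the paper's actual API.

```python
# A minimal sketch of mapping workflow provenance to Cloud infrastructure
# information and re-provisioning a similar resource; all names are assumed.
from dataclasses import dataclass

@dataclass(frozen=True)
class VMConfig:
    vcpus: int
    ram_gb: int
    image: str  # OS / software environment identifier

# Provenance trace: job id -> (workflow provenance, Cloud infrastructure info)
trace = {
    "job-001": ({"task": "align", "inputs": ["sample.fastq"]},
                VMConfig(vcpus=4, ram_gb=8, image="ubuntu-14.04-bwa")),
}

def provision_similar(config: VMConfig, flavours: list[VMConfig]) -> VMConfig:
    """Pick the smallest flavour that meets or exceeds the recorded config."""
    candidates = [f for f in flavours
                  if f.vcpus >= config.vcpus
                  and f.ram_gb >= config.ram_gb
                  and f.image == config.image]
    return min(candidates, key=lambda f: (f.vcpus, f.ram_gb))

flavours = [VMConfig(2, 4, "ubuntu-14.04-bwa"), VMConfig(4, 8, "ubuntu-14.04-bwa")]
_, recorded = trace["job-001"]
print(provision_similar(recorded, flavours))  # re-provision step of the replay
```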

    The Requirements for Ontologies in Medical Data Integration: A Case Study

    Evidence-based medicine is critically dependent on three sources of information: a medical knowledge base, the patient's medical record and knowledge of available resources, including, where appropriate, clinical protocols. Patient data are often scattered across a variety of databases and may, in a distributed model, be held in several disparate repositories. Consequently, addressing the needs of an evidence-based medicine community presents issues of biomedical data integration, clinical interpretation and knowledge management. This paper outlines how the Health-e-Child project has approached the challenge of requirements specification for (bio-)medical data integration, from the level of cellular data, through disease, to that of patient and population. The approach is illuminated through the requirements elicitation and analysis of Juvenile Idiopathic Arthritis (JIA), one of three diseases being studied in the EC-funded Health-e-Child project.
    Comment: 6 pages, 1 figure. Presented at the 11th International Database Engineering & Applications Symposium (IDEAS 2007), Banff, Canada, September 2007

    Reproducibility of scientific workflows execution using cloud-aware provenance (ReCAP)

    Provenance of scientific workflows has been considered a means of providing workflow reproducibility. However, the provenance approaches adopted so far are not applicable in the Cloud context because the provenance trace lacks Cloud information. This paper presents a novel approach that collects Cloud-aware provenance and represents it as a graph. Workflow execution reproducibility on the Cloud is determined by comparing workflow provenance at three levels: workflow structure, execution infrastructure and workflow outputs. The experimental evaluation shows that the implemented approach can detect changes in the provenance traces and in the outputs produced by the workflow.
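
    The three-level comparison can be sketched as a simple record diff: structure, infrastructure and outputs are checked independently, and a run counts as reproduced only if all three match. The record layout and field names below are assumptions for illustration, not ReCAP's actual schema.

```python
# A minimal sketch of the three-level reproducibility check; the record
# layout is assumed for illustration, not ReCAP's schema.
def compare_provenance(original: dict, repeat: dict) -> dict:
    """Compare two provenance records at the three levels named in the
    abstract: workflow structure, execution infrastructure, outputs."""
    return {
        # Level 1: same tasks and same dependency edges?
        "structure": original["edges"] == repeat["edges"],
        # Level 2: were jobs re-run on similar VM configurations?
        "infrastructure": original["vm_config"] == repeat["vm_config"],
        # Level 3: do the output files match (by content hash)?
        "outputs": original["output_hashes"] == repeat["output_hashes"],
    }

run_a = {"edges": [("align", "sort")], "vm_config": {"vcpus": 4},
         "output_hashes": {"out.bam": "ab12"}}
run_b = {"edges": [("align", "sort")], "vm_config": {"vcpus": 4},
         "output_hashes": {"out.bam": "ab12"}}
print(compare_provenance(run_a, run_b))  # all True -> execution reproduced
```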